The excellent performance of deep neural networks has enabled us to solve several automation problems, opening an era of autonomous devices. However, current deep network architectures are heavy, with millions of parameters, and require billions of floating-point operations. Several works have been developed to compress a pre-trained deep network to reduce the memory footprint and, possibly, the computation. Instead of compressing a pre-trained network, in this work we propose a generic neural network layer structure employing multilinear projection as the primary feature extractor. The proposed architecture requires several times less memory than traditional Convolutional Neural Networks (CNNs), while inheriting similar design principles. In addition, the proposed architecture is equipped with two computation schemes that enable computation reduction or scalability. Experimental results show the effectiveness of our compact projection, which outperforms traditional CNNs while requiring far fewer parameters.
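To give a rough sense of why a multilinear projection can act as a compact feature extractor, the sketch below shows a generic mode-wise tensor projection in NumPy. It is our own illustrative example under assumed tensor and factor shapes, not the paper's layer definition or computation schemes; the point is only that a set of small per-mode matrices can replace one very large dense projection.

```python
import numpy as np

def multilinear_projection(x, factors):
    """Project a tensor onto a smaller core via mode-wise (multilinear) products.

    x       : input tensor, e.g. a feature map of shape (C, H, W)  (assumed shapes)
    factors : list of matrices [U1, U2, ...], where U_k has shape (out_k, in_k)
              and in_k matches the k-th dimension of x.
    """
    y = x
    for mode, u in enumerate(factors):
        # Contract the current mode of y with u, producing a smaller axis,
        # then move that new axis back to its original position.
        y = np.tensordot(u, y, axes=(1, mode))
        y = np.moveaxis(y, 0, mode)
    return y

# Example: project a 16x32x32 feature map down to a 4x8x8 core.
x = np.random.randn(16, 32, 32)
factors = [np.random.randn(4, 16), np.random.randn(8, 32), np.random.randn(8, 32)]
y = multilinear_projection(x, factors)
print(y.shape)  # (4, 8, 8)
```

In this hypothetical configuration the three factor matrices hold 4*16 + 8*32 + 8*32 = 576 parameters, whereas a single dense map from the 16x32x32 input to a 4x8x8 output would need over four million, which illustrates the kind of memory saving that mode-wise projections make possible.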